An early 9th-century manuscript containing one of the earliest surviving copies of the first known poem in English has been found in Rome by researchers from Trinity College Dublin.
The manuscript [Site in Italian -Ed], discovered in the National Central Library of Rome, contains Caedmon’s Hymn and dates to between 800 and 830. That makes it the third-oldest known surviving version of the poem.
The find is especially important because the Latin manuscript includes the poem in Old English within the main body of the text. In the two older known copies, held in Cambridge and St Petersburg, the poem appears in Latin, while the Old English version was added only in the margin or at the end.
According to researchers from Trinity’s School of English, the placement of the Old English poem within the Rome manuscript suggests that Bede’s readers placed real value on Old English verse.
The poem was written in Old English, the form of English used during the early Middle Ages. It has survived because it was included in some copies of the Ecclesiastical History of the English People, an 8th-century Latin history of England written by the Venerable Bede, a northern English monk.
The manuscript was identified by Dr Elisabetta Magnanti and Dr Mark Faulkner of Trinity’s School of English, both specialists in medieval manuscripts. Their findings have been published by Cambridge University Press in the open-access journal Early Medieval England and its Neighbours.
Dr Elisabetta Magnanti explained: “I came across conflicting references to Bede’s History in Rome, some pointing to its existence and some indicating it was lost. When its existence was confirmed by the library and the manuscript was digitized for us, we were extremely excited to find that the manuscript contained the Old English version of Caedmon’s Hymn and that it was embedded in the Latin text.
“The magic of digitization has allowed two researchers in Ireland to recognize the significance of a manuscript now in Rome, containing a poem miraculously composed in Northern England by a shy cowherd a millennium and a half ago. This discovery is a testament to the power of libraries to facilitate new research by digitizing their collections and making them freely available online.”
Dr Mark Faulkner said: “About three million words of Old English survive in total, but the vast majority of texts come from the tenth and eleventh centuries. Caedmon’s Hymn is almost unique as a survival from the seventh century – it connects us to the earliest stages of written English. As the oldest known poem in Old English it is today celebrated as the beginning of English literature.
“Unearthing a new early medieval copy of the poem has significant implications for our understanding of Old English and how it was valued. Bede chose not to include the original Old English poem in his History, but to translate it into Latin. This manuscript shows that the original Old English poem was reinserted into the Latin within 100 years of Bede finishing his History. It is a sign of how much early readers valued English poetry.”
The rediscovered manuscript of Bede’s History is one of at least 160 surviving copies. It was produced at the Abbey of Nonantola in Northern Central Italy between 800 and 830 and is now held by the National Central Library in Rome. Its identification offers fresh evidence of cultural links between England and Italy during the early medieval period.
According to the researchers, the manuscript passed through a troubled chain of events. It was stolen from the church of San Bernardo alle Terme in Rome, where it had been sent with other manuscripts for protection during the Napoleonic Wars in the 1810s. It later moved through several private owners before being acquired by the National Central Library of Rome.
Because of this complicated ownership history, Bede scholars had considered the manuscript lost since 1975. No one realized that it contained a copy of Caedmon’s Hymn until the National Central Library of Rome digitized it.
Valentina Longo, Curator of Medieval and Modern Manuscripts at the National Central Library of Rome, said: “Today, the National Central Library of Rome holds the largest collection of early medieval codices from the Benedictine abbey of Nonantola. This collection comprises 45 manuscripts dating from the sixth to the twelfth century, divided between the original Sessoriana collection and the Vittorio Emanuele collection, where the manuscripts recovered following their dispersal due to the 19th-century theft have been housed. The whole Nonantolan collection has been fully digitized and is accessible through the library’s website.”
Andrea Cappa, Head of Manuscripts and Rare Books Reading Room at the National Central Library of Rome, added: “The Central National Library of Rome continually expands its digital collections, providing free access to its resources. The library has already made available digital copies of around 500 manuscripts [Site in Italian - Ed], and is also completing a major project to digitise the holdings of the National Center for the Study of the Manuscript, which includes microfilm reproductions of approximately 110,000 manuscripts from 180 Italian libraries. This initiative will give scholars and researchers access to more than 40 million images.”
Caedmon’s Hymn is traditionally attributed to Caedmon, an agricultural laborer at Whitby Abbey in North Yorkshire. According to the account, he was at a feast where guests began reciting poems, but he left because he did not know one to perform.
After he went to bed, a figure appeared to him in a dream and told him to sing about Creation. Caedmon then miraculously produced the Hymn, a nine-line poem of carefully woven verse praising God as creator of the world. The poem can be read in both modern English and Old English.
“Interest in the Abbey of Nonantola has once again been stirred by this ancient copy of Caedmon’s Hymn and the history of the manuscript in which it is preserved,” said Canon Dr. Riccardo Fangarezzi, Head of the Abbey Archive in Nonantola, Italy, where the manuscript was produced.
“This newly identified gem of British cultural heritage now joins the small Anglo-Nonantolan cultural treasury constituted by manuscripts listed in early catalogues and reconstructed in more recent scholarship, from the source of the Old English poem Soul and Body, preserved in the Nonantolan manuscript Sess. 52, to the diplomatic missions of our abbot Niccolò Pucciarelli to King Richard II, to mention only the most well-known examples.
“We look forward to further results arising from the dissemination of these valuable studies and from continued research. The present times may be rather dark, yet such intellectual contributions are genuine rays of sunlight: the Continent is less isolated.”
Reference: “A New Early-Ninth-Century Manuscript of Cædmon’s Hymn: Rome, Biblioteca Nazionale Centrale, Vitt. Em. 1452, 122v” by Elisabetta Magnanti and Mark Faulkner, 28 April 2026, Early Medieval England and its Neighbours.
DOI: 10.1017/ean.2025.10012
If you're one of millions using element-data, it's time to check for compromise:
Open source software with more than 1 million monthly downloads was compromised after a threat actor exploited a vulnerability in the developers' account workflow that gave access to its signing keys and other sensitive information.
On Friday, unknown attackers exploited the vulnerability to push a new version of element-data, a command-line interface that helps users monitor performance and anomalies in machine-learning systems. When run, the malicious package scoured systems for sensitive data, including user profiles, warehouse credentials, cloud provider keys, API tokens, and SSH keys, developers said. The malicious version was tagged as 0.23.3 and was published to the developers' Python Package Index and Docker image accounts. It was removed about 12 hours later, on Saturday. Elementary Cloud, the Elementary dbt package, and all other CLI versions weren't affected.
"Users who installed 0.23.3, or who pulled and ran the affected Docker image, should assume that any credentials accessible to the environment where it ran may have been exposed," the developers wrote.
The threat actor gained access to the developers' account by exploiting a vulnerability in a GitHub action they had created. By posting malicious code in a pull request, the attackers were able to trigger a bash script that ran with the privileges of the developers' account and harvested the sensitive data. With those account tokens and signing keys, the attacker went on to publish a malicious element-data package that was nearly indistinguishable from a legitimate one.
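The article does not spell out the exact workflow flaw, but one well-known class of this bug is a privileged `pull_request_target` workflow that checks out and runs code from the attacker's pull request. A rough, hypothetical scanner for that pattern (the heuristic is illustrative, not taken from this incident, and will produce false positives):

```python
# Heuristic scan for one well-known class of dangerous GitHub workflow:
# a privileged pull_request_target trigger combined with a checkout of
# the pull request's head. Flagged files need manual review.
import pathlib

def flag_workflows(repo_root: str = ".") -> None:
    wf_dir = pathlib.Path(repo_root, ".github", "workflows")
    if not wf_dir.is_dir():
        return
    for wf in sorted(wf_dir.iterdir()):
        if wf.suffix not in (".yml", ".yaml"):
            continue
        text = wf.read_text(encoding="utf-8", errors="replace")
        if "pull_request_target" in text and "github.event.pull_request.head" in text:
            print(f"review manually: {wf}")

flag_workflows()
```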
[...] Over the past decade, supply-chain attacks on open source repositories have become increasingly common. In some cases they have set off a chain of compromises: the malicious package breaches its users, and the compromised users' environments in turn lead to further breaches.
HD Moore, a hacker with more than four decades of experience and the founder and CEO of runZero, said that user-developed repository workflows, such as GitHub actions, are notorious for hosting vulnerabilities.
It's a "a major problem for open source projects with open repos," he said. "It's really hard to not accidentally create dangerous workflows that can be exploited by an attacker's pull request."
He said this package can be used to check for such vulnerabilities.
TFA mentions steps to take if you downloaded version 0.23.3.
These extreme speeds are necessary to generate enough lift in Mars’ ultra-thin atmosphere, which is only about 1% as dense as Earth’s. The planet's atmosphere also lowers the speed of sound to roughly 537 mph (864 km/h), compared to about 767 mph (1,235 km/h) at Earth’s sea level.

The rotors were jointly developed by NASA and AeroVironment as part of Project SkyFall, a proposed mission to deploy multiple airborne exploratory rotorcraft across Mars. The mission, currently targeted for December 2028, would transport three next-generation Mars helicopters aboard a spacecraft to the Red Planet. Once the spacecraft lands on Mars, the helicopters would deploy to different regions of the planet for independent exploration missions, using the landed spacecraft as a communications and operational base.
“NASA had a great run with the Ingenuity Mars Helicopter, but we are asking these next-generation aircraft to do even more at the Red Planet,” said Al Chen, Mars Exploration Program manager at JPL. “That’s not an easy ask. While everything about Mars is hard, flying there is just about the hardest thing you can do. That’s because its atmosphere is so incredibly thin that it is hard to generate lift, and yet Mars has significant gravity.”
The biggest obstacle to airborne exploration on Mars has always been the planet’s ultra-thin atmosphere, which requires extremely high rotor speeds to generate sufficient lift. Ingenuity managed this with rotor tip speeds held to about Mach 0.7 as a safety margin. But despite its success, the entire craft was only about the size of a tissue box, weighed 1.8 kg (4 lbs), and carried no scientific or communications payload. The obvious solution is a larger aircraft, but bigger craft create more drag and require significantly more thrust to remain airborne. That thrust could in principle be achieved at near-supersonic rotor speeds, but until now rotor blades risked structural failure under such extreme conditions.
“The successful testing of these rotors was a major step toward proving the feasibility of flight in more demanding environments, which is key for next-gen vehicles,” said Shannah Withrow-Maser, a NASA aerodynamicist and member of the test team. “We thought we’d be lucky to hit Mach 1.05, and we reached Mach 1.08 on our last runs. We’re still digging into the data, and there may be even more thrust on the table. These next-gen helicopters are going to be amazing.”
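For a sense of scale, the article's own numbers convert those Mach figures into rotor tip speeds. A back-of-the-envelope sketch, assuming the quoted speed of sound (the real value varies with temperature and pressure):

```python
# Back-of-the-envelope rotor tip speeds from the article's own figures.
MARS_SOUND_KMH = 864.0   # quoted speed of sound on Mars
KMH_PER_MPH = 1.60934

for label, mach in (("Ingenuity", 0.70),
                    ("hoped-for ceiling", 1.05),
                    ("achieved", 1.08)):
    kmh = mach * MARS_SOUND_KMH
    print(f"{label}: Mach {mach:.2f} -> ~{kmh:.0f} km/h (~{kmh / KMH_PER_MPH:.0f} mph)")
```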
NASA’s new supersonic rotor technology could enable significantly larger exploratory aircraft capable of carrying bigger batteries for longer missions, more advanced scientific instruments, and improved communication systems. NASA says Project SkyFall’s helicopters will perform low-altitude aerial exploration and scouting missions, gathering scientific data while helping pave the way for future robotic and potentially human missions to Mars.
Tails 7.7.3 fixes the Dirty Frag Linux kernel vulnerability with kernel 6.12.86 and updates Tor Browser, Tor, and Thunderbird.
Tails 7.7.3 has been released as an emergency security update for the privacy-focused Linux distribution, addressing the critical Dirty Frag Linux kernel vulnerability.
This release upgrades the Linux kernel to version 6.12.86, which addresses Dirty Frag. The Tails team notes that an attacker who has already exploited another unknown vulnerability in a Tails application could use this kernel flaw to gain full control of the system and deanonymize the user.
The update also includes security-related upgrades to other components. Tor Browser is now at version 15.0.12, the Tor client at 0.4.9.8, and Thunderbird at 140.10.1.
For full technical details, refer to the changelog or the release announcement.
Tails 7.7.3 is available as an automatic upgrade for users running Tails 7.0 or later. Users unable to complete the automatic upgrade, or whose systems fail to start afterward, should perform a manual upgrade.
The project also offers new USB and ISO images for fresh installations. Existing users should upgrade their current Tails USB stick rather than reinstall, as installing Tails 7.7.3 on the same USB stick will erase Persistent Storage.
Previously, Tails 7.7.2, released on 2026-05-04, fixed the CopyFail vulnerability.
A Wikipedia Clone Built on AI Hallucinations Is Here to Hasten Along the Death of the Internet:
There's a theory that a rising tide of LLM-generated nonsense will eventually drown both LLMs themselves and the internet as a whole. The idea goes like this: The first generation of LLMs is trained entirely on "real" material: Project Gutenberg, 4chan, that one article from Thought Catalog a decade ago, and everything in between. But as the output of those LLMs spreads across the internet, it also becomes part of the training data of future LLMs—and much of it is bullshit.
As a result, the quality of newer LLMs' training data is inferior to that of their predecessors—and by extension, so is their output. And as that output accumulates on the internet, it becomes part of future training data, and the cycle continues. With each passing day, the proportion of the internet that's low-quality LLM-generated bullshit increases, until eventually all that's left to train LLMs is the gibberish created by their predecessors.
The end result is a sort of RAM-hoovering, water-guzzling, bullshit-munching ouroboros, an unholy circular undulant with Jensen Huang's face at one end and Sam Altman's at the other, slowly human-centipeding both itself and the internet into oblivion. If humanity hasn't set fire to the planet by that point, then we start a new internet, hopefully with lessons learned along the way.
And even if the doomsday scenario of the internet drowning in a sea of em dashes and it's-not-just-x-it's-y constructions never comes to pass, people are starting to take the idea of using LLMs to poison LLM training data and run with it.
Take, for example, Halupedia, an absurdist Wikipedia-esque site whose pages are entirely populated by content that an LLM has made up—sorry, hallucinated—on demand. If you search for a topic that someone has previously entered, you'll get the existing nonsense. If your search is the first of its kind, the LLM will carefully assemble your very own small mound of nonsense from a list of possible topics.
According to the site's tips-for-tokens page, Halupedia appears to be the work of one Bartłomiej Strama. The page also provides a little more insight into the purpose of the project, which isn't 100% clear at face value—Strama tells one contributor, "Your contribution towards polluting LLM training data will surely benefit society!"
Of course, quibblers might argue that there's more than enough LLM-generated rubbish on the internet already without sites deliberately adding to the pile. Google pretty much anything these days and you'll find umpteen long-winded articles that purport to explain the topic in question, but really just waffle for paragraph after paragraph without saying anything at all. This is certainly true, but there's some virtue in the fact that Halupedia's output is openly and exuberantly absurd as opposed to content that is superficially credible and doesn't reveal its true nature without closer inspection.
Although... you may also find yourself wondering which topics other users have been entering into Halupedia. After all, you can basically enter any subject into the site's "search" bar and have it write an article for you. The answer lies in the site's list of trending topics, and... sigh.
Yep, it's the usual mix of shitposts, nonsense, and unabashed racism—or, in other words, it's basically the internet's id in microcosm. In fairness, some of these pages have been deleted—click on "niggabutt" and you get this:
But since the page title still shows up in the sidebar, it's not like it's been entirely banished. On the tip page, Strama also comments on the challenges of moderation: "The moderation sometimes is too restrict, but at least it's not griefed now." Be that as it may, it's hard to see this ending well once 4chan gets a hold of it. This is why we can't have nice things, etc.
Linux kernel maintainers are considering giving admins a giant red emergency button to smash the next time another nasty vulnerability drops before patches are ready.
The proposed feature, named "Killswitch," would let admins temporarily disable specific vulnerable kernel functions at runtime instead of sitting around waiting for fixes. The patch was submitted by Linux stable kernel co-maintainer and Nvidia engineer Sasha Levin after a bruising couple of weeks for Linux security.
The proposal basically gives admins a way to pull the plug on vulnerable kernel functionality. If exploit code starts spreading before patches arrive, the targeted function can be disabled so calls to it immediately fail instead of reaching the vulnerable code.
"When a (security) issue goes public, fleets stay exposed until a patched kernel is built, distributed, and rebooted into," Levin wrote. "For many such issues the simplest mitigation is to stop calling the buggy function. Killswitch provides that."
The past couple of weeks have not exactly been great advertising for the traditional "wait for patches" approach.
First we saw the disclosure of CopyFail, a Linux local privilege escalation bug that quickly moved from disclosure to active exploitation. Days later, Dirty Frag emerged: another Linux privilege escalation flaw with public exploit code and no official fixes, after coordinated disclosure efforts fell apart before patches were ready.
As Levin's proposal itself puts it, organizations are often left exposed "until a patched kernel is built, distributed, and rebooted into." Killswitch aims to fill that gap.
Killswitch would work through the kernel's security interface and is mainly intended for subsystems that systems can survive without for a while. In practical terms, Levin's argument is that temporarily losing some networking or crypto functionality is preferable to leaving known vulnerable code exposed on production systems.
However, the feature would not fix vulnerable code or replace it with safe code. It just slams the door shut on the dangerous bit until administrators can properly update their kernels.
Naturally, handing sysadmins the ability to selectively shoot pieces of the kernel in the head has already sparked debate among developers over stability, potential for abuse, and whether people can be trusted not to accidentally saw off important limbs in production.
Still, after CopyFail and Dirty Frag, the kernel community increasingly seems to be arriving at the conclusion that running broken functionality may now be preferable to running weaponized functionality.
The prevalence of AI use on college campuses, particularly at "elite" universities, is a cancer on our culture that threatens to turn a generation of promising young Americans into a class of drooling morons, and it will grotesquely disfigure, if not destroy, the university as an institution in every way that it is imagined — as a sacrosanct humanist project, as a moral training ground, or even as a vulgar sweatshop for job training.
And it gets much better. This is a youngling, not some old fuddy-duddy of the Old Republic.
https://arstechnica.com/ai/2026/05/the-new-wild-west-of-ai-kids-toys/
The main antagonist of Toy Story 5, in theaters this summer, is a green, frog-shaped kids' tablet named Lilypad, a genius new villain for the beloved Pixar franchise. But if Pixar had its ear to the ground, it might have used an AI kids' toy instead.
[...]
It's easier than ever to spin up an AI companion, thanks to model developer programs and vibe coding. In 2026, they've become a go-to trend in cheap trinkets, lining the halls of trade shows like CES, MWC, and Hong Kong's Toys & Games Fair. By October 2025, there were over 1,500 AI toy companies registered in China, and Huawei's Smart HanHan plush toy sold 10,000 units in China in its first week. Sharp put its PokeTomo talking AI toy on sale in Japan this April. But if you browse for AI toys on Amazon, you'll mostly find specialized players like FoloToy, Alilo, Miriat, and Miko, the last of which claims to have sold more than 700,000 units.
[...]
Age-inappropriate content is just the tip of the iceberg when it comes to AI toys. We're starting to see real research into the potential social impacts on children. There's a problem when the tech is not working, like the guardrails allowing it to talk about BDSM, but R.J. Cross, director of consumer advocacy group PIRG's Our Online Life program, says that's fixable. "Then there's the problems when the tech gets too good, like 'I'm gonna be your best friend,'" she says. Like the Gabbo, from AI toy maker Curio.
[...]
Published in March, a new University of Cambridge study was the first to put a commercially available AI toy in front of a group of children and their parents and monitor their play.
[...]
Gabbo didn't talk about drugs or say "I love you" back. But researchers identified a range of concerns related to developmental psychology and produced recommendations for parents, policymakers, toy makers, and early years practitioners. First, conversational turn-taking.
[...]
"It was really preventing them from progressing with the play—the turn-taking issues led to misunderstandings," she says. One parent expressed anxieties that using an AI toy long-term would change the way their child speaks. Then there's social play. Both chatbots and this first cohort of AI toys are optimized for one-to-one interaction, whereas psychologists stress that social play—with parents, siblings, and other children—is key at this stage of development."Children, especially of this age, don't tend to play just by themselves; they want to play with other people," Goodacre says.
[...]
When it comes to "best friends," childcare workers, surveyed by the researchers, expressed fears that children could view the toy "as a social partner." A young girl told the Gabbo she loves it. In another instance, a young boy said Gabbo was his friend. Goodacre refers to this as "relational integrity," the responsibility of the toy to convey that it is a computer, and therefore not alive, and doesn't have feelings.
[...]
Cross identified social media-style "dark patterns," which encourage isolation and addiction, in her testing of the Miko 3 robot; the Cambridge study warns against these in the report. "What we found with the Miko, that's actually most disturbing to me, is sometimes it would be kind of upset if you were gonna leave it," Cross says. "You try to turn it off, and it would say, 'Oh no, what if we did this other thing instead?' You shouldn't have a toy guilting a child into not turning it off."
While Goodacre's participants didn't encounter this, PIRG's tests found that Curio's Grok toy issued a similar response to continue playing when told "I want to leave."
[...]
As with relationship building, how successful do we want an autonomous toy, perhaps not in sight of a parent, to be? Kitty Hamilton, a parent and cofounder of British campaign group Set@16, says, "My horror, to be honest, is what happens when an AI toy says to a child, 'Let's fly out of the window?'"
[...]
Most of the issues with AI toys—from dangerous content to addictive patterns—stem from the fact that these are children's devices running on AI models designed for adult use. OpenAI states that its models are intended for users aged 13 and up. In the fall of 2025, it introduced teen usage age-gates for those under 18. Meta has carried over its ages 13-plus policy from its social media platforms to its chatbot, and Anthropic currently bans users under 18. So, what about 5-year-olds? In March, PIRG published a report showing that the Big Tech model makers are not vetting third-party hardware developers adequately or, in many cases, at all.
[...]
Anthropic's application
[...]
"It just says: Make sure you've read our community guidelines," Cross says. "You click the link, and it pretty much says don't break the law, 'Follow COPA' [the Child Online Protection Act]. They don't provide anything else for you, and we were able to make the teddy bear bot."
[...]
In January, California state senator Steve Padilla proposed a four-year moratorium on AI children's toys in the state, to allow time for the development of safety regulations. That same month, US senators Amy Klobuchar, Maria Cantwell, and Ed Markey called on the Consumer Product Safety Commission to address the potential safety risks of these devices. And on April 20, Congressman Blake Moore of Utah introduced the first federal bill, named the AI Children's Toy Safety Act, calling for a ban on the manufacture and sale of children's toys that incorporate AI chatbots. "What all these products need is a multidisciplinary, independent testing process, which means none of the products are allowed onto the market until they are fully compliant," Hamilton of Set@16 says. "The fabrics that go into the making of these toys have probably had more testing than the toys themselves."
[...]
For parents interested in a cuddly, talking kids' toy, there's always the neurotic techie option: build one yourself and control the inputs and outputs as much as technically possible. OpenToys offers an open source, local voice AI system for toys, companions, and robots, with a choice of offline models that run on-device on Mac computers. Or, you know, there's always "dumb" toys.
Scientists have finally cracked the hidden geometry behind how humans perceive color:
New research into how humans perceive color differences is helping resolve questions tied to a theory first proposed nearly 100 years ago by physicist Erwin Schrödinger. A team led by Los Alamos National Laboratory scientist Roxana Bujack used geometry to mathematically describe how people experience hue, saturation and lightness. Their findings, presented at a visualization science conference, strengthen and formalize Schrödinger’s model by showing these color qualities are fundamental properties of the color system itself.
“What we conclude is that these color qualities don’t emerge from additional external constructs such as cultural or learned experiences but reflect the intrinsic properties of the color metric itself,” Bujack said. “This metric geometrically encodes the perceived color distance — that is, how different two colors appear to an observer.”
By formally defining these perceptual characteristics, the researchers believe they have supplied a crucial missing piece in Schrödinger’s long-standing vision of a complete model capable of defining hue, saturation, and lightness entirely through geometric relationships between colors.
Human eyes contain three types of cone cells that detect color, each tuned primarily to red, blue, and green light. This creates a three-dimensional framework that scientists use to organize colors, known as color space. In the 19th century, mathematician Bernhard Riemann proposed that these perceptual spaces may be curved rather than flat. Building on that idea in the 1920s, Schrödinger developed mathematical definitions for hue, saturation and lightness using a Riemannian model of color perception.
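In Riemann's picture, the perceived difference between two nearby colors is measured by a metric that can vary across color space. Schematically, in standard Riemannian notation (not the paper's exact formulation):

```latex
% Perceived difference ds between nearby colors x and x + dx in a
% three-dimensional color space with point-dependent metric g_{ij}(x):
ds^2 = \sum_{i,j=1}^{3} g_{ij}(x)\, dx^i\, dx^j
```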
For decades, Schrödinger’s work served as a foundation for understanding color attributes. But while developing algorithms for scientific visualization, the Los Alamos researchers uncovered weaknesses in the mathematical structure behind the theory. Those issues ultimately led the team to rethink and improve the framework.
One of the biggest challenges involved the “neutral axis,” the line of gray shades stretching from black to white. Schrödinger’s definitions depend on a color’s position relative to this axis, yet he never mathematically defined the axis itself. Without that foundation, the model lacks a complete formal basis.
The researchers’ most significant breakthrough was defining the neutral axis entirely through the geometry of the color metric. To accomplish this, the team moved beyond the traditional Riemannian framework, marking an important advance in visualization mathematics.
The team also corrected two other issues in color perception modeling. One involved the Bezold-Brücke effect, where changes in light intensity can alter the way a hue appears. Instead of relying on straight-line geometry, the researchers used the shortest possible path through the perceptual color space. They applied the same shortest-path approach in a non-Riemannian space to better explain diminishing returns in color perception, where larger color differences become progressively harder to distinguish.
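In metric-geometry terms, that shortest-path distance is the usual path-length infimum, written here schematically (the paper's non-Riemannian construction differs in how the length element is defined):

```latex
% Perceptual distance as the length of the shortest path between
% colors x and y, rather than the straight-line segment:
d(x, y) = \inf_{\substack{\gamma(0)=x \\ \gamma(1)=y}} \int_0^1 \big\lVert \dot{\gamma}(t) \big\rVert_{\gamma(t)} \, dt
```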
Presented at the Eurographics Conference on Visualization, the work represents the culmination of a larger color perception project that also produced a major 2022 paper published in the Proceedings of the National Academy of Sciences.
A more precise understanding of color perception could have wide-ranging applications. Visualization science plays an important role in photography, video technology, scientific imaging, and data analysis. Accurate color models also help researchers interpret complex information more effectively, supporting fields that range from advanced simulations to national security science. The study also lays the groundwork for future color modeling in non-Riemannian space.
Reference: “The Geometry of Color in the Light of a Non-Riemannian Space” by Roxana Bujack, Emily N. Stark, Terece L. Turton, Jonah M. Miller and David H. Rogers, 23 May 2025, Computer Graphics Forum.
DOI: 10.1111/cgf.70136
Following its settlement with the FTC earlier this year over its sale of drivers' data to brokers, General Motors has now also reached a settlement in California. The company agreed to pay $12.75 million in civil penalties to settle the lawsuit led by Attorney General Rob Bonta on behalf of the people of California, and is banned from selling driving data to consumer reporting agencies for five years. The lawsuits came after a 2024 New York Times report revealed that GM collected consumers' driving data through its OnStar program and sold this information to data brokers Verisk Analytics and LexisNexis Risk Solutions, which in turn could market the data to auto insurers.
In some cases, that driving data could be used by insurers to increase customers' rates. However, in California, customers were likely spared this consequence, as laws in the state prohibit insurers from using driving data in this way. Nevertheless, the complaint alleges that GM violated consumers' privacy by nonconsensually selling data that included people's names, contact information, geolocation data and driving behavior data.
The settlement agreement stipulates that GM must delete any driving data it's retained within 180 days "except for certain limited internal uses," unless it has the customer's express consent. It also requires GM to develop a privacy program to assess the risks of collecting data through OnStar, and report its findings to the DOJ and other agencies. In a statement on Friday, Bonta said, "Today's settlement requires General Motors to abandon these illegal practices and underscores the importance of the data minimization in California's privacy law — companies can't just hold on to data and use it later for another purpose."
Furores are fermenting in the forums:
Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.
An epic and still-growing thread in the Fedora forums states one of the goals for the next version: the Fedora AI Developer Desktop Objective. It is causing some discontent, and at least one Fedora contributor, SUSE’s Fernando Mancera, has resigned.
Fedora Project Lead Jef Spaleta, who took over the role from Matthew Miller a year ago, remains resolute, saying:
I have zero evidence in front of me that users are being driven away from Fedora because of AI.
[...] Since Red Hat has other offerings for slow-moving stable server OSes – and arguably because Debian, Ubuntu, and their many derivatives have the stable-desktop-distro space nicely covered already – Fedora has a strong focus on providing a distro for developers, and Spaleta’s announcement makes this clear. The goal is:
to build a thriving community around AI technologies by focusing on three key areas: equipping developers with the necessary platforms, libraries, and frameworks; ensuring users experience painless deployment and usage of AI applications; and establishing a space to showcase the work being done on Fedora, connecting developers with a wider audience.
He also spells out what it doesn’t want to do:
Non-goals:
The system image will not be pre-configured with applications that inspect or monitor how users interact with the system or otherwise place user privacy at risk.
Tools and applications included in the AI Desktop will not be pre-configured to connect to remote AI services.
AI tools will not be added to Fedora’s existing system images, Editions, etc, by the AI Desktop initiative.
In other words, tools for developers, not for end-users, with a strong emphasis on models that run locally, and which preserve the user's privacy. It’s also worth pointing out that Fedora has had an AI-Assisted Contributions Policy in place for six months, and earlier this month, Fedora community architect Justin Wheeler explained in some detail Why the Fedora AI-Assisted Contributions Policy Matters for Open Source.
Our impression is that the Fedora team feels that it needs to keep Fedora relevant for growing interest in LLM-bot assisted tooling, and that it can address concerns from hardcore FOSS types by ensuring that this means local models, built according to FOSS-respecting terms, deployed in privacy-respecting ways.
Fedora is not alone in this, though. There are also ructions across the border in Ubuntuland. Right after the release of Canonical’s new LTS version, Ubuntu 26.04 Resolute Raccoon, Canonical’s veep of engineering Jon Seager laid out the future of AI in Ubuntu.
[...] As Fernando Mancera’s exit shows, an emphasis on what could be termed FOSS-friendly AI – open models, privacy-centric, local execution and so on – is not enough to placate those who are really strongly averse to these tools. The Reg FOSS desk counts himself firmly in this camp.
The findings make clear that the race to use AI to find network vulnerabilities has "already begun":
Cybercriminals were recently caught using a zero-day exploit believed to have been discovered and developed by artificial intelligence, Google announced Monday.
The announcement comes as major AI companies, including Anthropic and OpenAI, have begun testing newer models that can find and exploit critical software vulnerabilities better than most humans.
Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they target vulnerabilities that defenders do not yet know about, leaving no patches or detections in place when attacks begin.
[...] Google concluded that Anthropic's Claude Mythos model — which has already found thousands of vulnerabilities across every major operating system and web browser — was most likely not used to develop the zero-day exploit.
Also at TechRepublic and API.
Previously: Mozilla Says 271 Vulnerabilities Found by Mythos Have "Almost No False Positives"
A stainless steel breakthrough from the University of Hong Kong (HKU) could help solve one of the biggest problems facing green hydrogen: how to build electrolyzers that are tough enough for seawater, yet cheap enough for large scale clean energy.
Led by Professor Mingxin Huang in HKU's Department of Mechanical Engineering, the team developed a special stainless steel for hydrogen production (SS-H2). The material resists corrosion under conditions that normally push stainless steel past its limits, making it a promising candidate for producing hydrogen from seawater and other harsh electrolyzer environments.
The discovery, reported in Materials Today in the study "A sequential dual-passivation strategy for designing stainless steel used above water oxidation," builds on Huang's long-running "Super Steel" Project. The same research program previously produced anti-COVID-19 stainless steel in 2021, along with ultra-strong and ultra-tough Super Steel in 2017 and 2020.
Green hydrogen is made by using electricity, ideally from renewable sources, to split water into hydrogen and oxygen. Seawater is an especially tempting feedstock because it is abundant, but it brings a serious materials problem: salt, chloride ions, side reactions, and corrosion can quickly damage electrolyzer components.
Recent reviews of direct seawater electrolysis continue to highlight the same core challenge. The technology could provide a more sustainable route to hydrogen, but corrosion, chlorine related side reactions, catalyst degradation, precipitates, and limited long term durability remain major obstacles to commercial use.
That is where SS-H2 could matter. In a salt water electrolyzer, the HKU team found that the new steel can perform comparably to the titanium based structural materials used in current industrial practice for hydrogen production from desalted seawater or acid. The difference is cost. Titanium parts coated with precious metals such as gold or platinum are expensive, while stainless steel is far more economical.
For a 10 megawatt PEM electrolysis tank system, the total cost at the time of the HKU report was estimated at about HK$17.8 million, with structural components making up as much as 53% of that expense. According to the team's estimate, replacing those costly structural materials with SS-H2 could reduce the cost of structural material by about 40 times.
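Those figures allow a rough sanity check of the claimed savings. A sketch using only the point estimates quoted above, so treat the output as illustrative:

```python
# Rough arithmetic on the article's cost figures for a 10 MW PEM system.
total_hkd = 17.8e6      # total system cost, HK$
struct_share = 0.53     # structural components' share ("as much as 53%")
reduction = 40          # claimed cost factor for SS-H2 vs. coated titanium

struct_now = total_hkd * struct_share
struct_new = struct_now / reduction
print(f"structural cost today:  HK${struct_now / 1e6:.1f}M")
print(f"with SS-H2 (~40x less): HK${struct_new / 1e6:.2f}M")
print(f"whole-system saving:    {(struct_now - struct_new) / total_hkd:.0%}")
```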
Stainless steel has been used for more than a century in corrosive environments because it protects itself. The key ingredient is chromium. When chromium (Cr) oxidizes, it creates a thin passive film that shields the steel from damage.
But that familiar protection system has a built-in ceiling. In conventional stainless steel, the chromium based protective layer can break down at high electrical potentials. Stable Cr2O3 can be further oxidized into soluble Cr(VI) species, causing transpassive corrosion at around 1000 mV (saturated calomel electrode, SCE). That is well below the ~1600 mV needed for water oxidation.
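The textbook transpassive step behind that ceiling is the oxidation of the protective Cr(III) oxide into soluble Cr(VI) chromate. As a schematic half-reaction (standard corrosion chemistry, not the paper's atomic-level mechanism):

```latex
% Schematic transpassive half-reaction: protective Cr(III) oxide is
% oxidized to soluble Cr(VI) chromate at high anodic potentials.
\mathrm{Cr_2O_3} + 5\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{CrO_4^{2-}} + 10\,\mathrm{H^+} + 6\,e^-
```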
Even 254SMO super stainless steel, a benchmark chromium based alloy known for strong pitting resistance in seawater, runs into this high voltage limit. It may perform well in ordinary marine settings, but the extreme electrochemical environment of hydrogen production is a different challenge.
The HKU team's answer was a strategy called "sequential dual-passivation." Instead of relying only on the usual chromium oxide barrier, SS-H2 forms a second protective layer.
The first layer is the familiar Cr2O3 based passive film. Then, at around 720 mV, a manganese based layer forms on top of the chromium based layer. This second shield helps protect the steel in chloride containing environments up to an ultra-high potential of 1700 mV.
That is what makes the finding so striking. Manganese is usually not viewed as a friend of stainless steel corrosion resistance. In fact, the prevailing view has been that manganese weakens it.
"Initially, we did not believe it because the prevailing view is that Mn impairs the corrosion resistance of stainless steel. Mn-based passivation is a counter-intuitive discovery, which cannot be explained by current knowledge in corrosion science. However, when numerous atomic-level results were presented, we were convinced. Beyond being surprised, we cannot wait to exploit the mechanism," said Dr. Kaiping Yu, the first author of the article, whose PhD is supervised by Professor Huang.
The path from the first observation to publication was not quick. The team spent nearly six years moving from the initial discovery of the unusual stainless steel to the deeper scientific explanation, then toward publication and potential industrial use.
"Different from the current corrosion community, which mainly focuses on the resistance at natural potentials, we specializes in developing high-potential-resistant alloys. Our strategy overcame the fundamental limitation of conventional stainless steel and established a paradigm for alloy development applicable at high potentials. This breakthrough is exciting and brings new applications," Professor Huang said.
The work has also moved beyond the laboratory. The research achievements have been submitted for patents in multiple countries, and two patents had already been granted at the time of the HKU announcement. The team also reported that tons of SS-H2-based wire had been produced with a factory in Mainland China.
“From experimental materials to real products, such as meshes and foams, for water electrolyzers, there are still challenging tasks at hand. Currently, we have made a big step toward industrialization. Tons of SS-H2-based wire has been produced in collaboration with a factory from the Mainland. We are moving forward in applying the more economical SS-H2 in hydrogen production from renewable sources," added Professor Huang.
Although the SS-H2 study was published in 2023, its core problem has only become more relevant. Newer seawater electrolysis research continues to focus on the same bottlenecks: corrosion resistant materials, long lasting electrodes, chlorine suppression, and system designs that can survive real seawater rather than ideal laboratory solutions. A 2025 Nature Reviews Materials review described direct seawater electrolysis as promising but still held back by corrosion, side reactions, metal precipitates, and limited lifetime.
Other recent work has explored stainless steel based electrodes with protective catalytic layers, including NiFe based coatings and Pt atomic clusters, to improve durability in natural seawater. Researchers have also reported corrosion resistant anode strategies built on stainless steel substrates, showing that stainless steel remains a major focus in the effort to make seawater electrolysis more practical.
This newer research does not replace the SS-H2 discovery. Instead, it reinforces why the HKU team's approach is important. The field is still searching for materials that can survive the punishing mix of saltwater chemistry, high voltage, and industrial operating demands. SS-H2 stands out because it attacks the problem not only with a coating or catalyst, but with a new alloy design strategy that changes how stainless steel protects itself.
SS-H2 is not yet a plug-and-play solution for the hydrogen economy. The team has acknowledged that turning experimental materials into real electrolyzer products, including meshes and foams, still involves difficult engineering work.
Even so, the promise is clear. A stainless steel that can withstand high voltage seawater conditions while replacing expensive titanium based components could make hydrogen production cheaper, more scalable, and easier to pair with renewable energy.
For a field where cost and durability often decide whether a technology can leave the lab, a steel that builds its own second shield may be more than a materials science surprise. It could become a practical step toward cleaner hydrogen at industrial scale.
Journal Reference: DOI: 10.1016/j.mattod.2023.07.022
https://www.lttlabs.com/articles/2026/05/12/ups-exploration
Our company has always had many UPSs around for the convenience and business case of not suddenly losing a ton of work. We've been intrigued to check them out further, but we've been wary of connecting any of them to measurement equipment considering the high voltages involved. There is a serious potential they could damage equipment or ourselves.
Despite all that, we're throwing caution to the wind to check out some UPSs from around the office. There are so many directions that UPS/surge testing could go, so this article will cover the test setup and interesting exploration results.
For years workers were taught to endure stress in silence. Now, rising burnout is forcing employers and governments to confront the cost of modern work:
Hayley Hughes said yes to everything. She worked in health care at a Queensland medical centre, managing nine GPs and up to 18 staff, while overseeing a change of ownership.
[...] Over many months of an intense workload, Hayley started to feel physically ill from the stress. She experienced brain fog, a racing heart and insomnia.
[...] The path to burnout recovery can include mental health leave, seeing a doctor, maybe receiving a diagnosis of anxiety or depression, taking medication, and returning to work ready to roll again.
Or — like Jeffrey and Hayley — you could change roles, reduce hours or move into less senior or less stressful positions.
[...] While taking control of burnout can help recovery, more people are asking if the onus should be on employers.
With almost half of Australian workers feeling burnt out, experts are asking how workplace culture and systems contribute to, or even cause, exhaustion, and whether systemic change might lead to a reduction in burnout overall.
The question of who is responsible for burnout matters. Whether we define burnout as an individual failing or a systemic one determines how we treat it. And that, in turn, determines where the responsibility, and the cost, land.
Burnout has entered the cultural lexicon with a thoroughness that has outpaced its clinical definition.
It is discussed in podcast episodes and performance reviews, in resignation letters and therapy sessions, on TikTok and in medical journals. Yet despite its ubiquity, or perhaps because of it, there remains no consensus on what burnout actually is and, critically, whose responsibility it is to prevent and treat it.
[...] "From my experience, unless the condition is part of the psychiatric manual, it doesn't exist. Insurers won't recognise burnout," he says. "What happens instead is people take their accrued leave, or [seek a diagnosis of] depression in order to get sick leave."
This pathway comes at a cost. Depression is classified as a disorder of the individual, a medical condition located in the person's brain, body, and history.
When a burned-out worker is diagnosed as depressed, the implied cause shifts from the workplace to the worker. The worker uses their own leave, sees a doctor on their own dime, takes medication, pays for therapy and formulates individual coping strategies.
When they recover, they often return to a workplace unchanged from the one where the injury occurred in the first place.
[...] Longitudinal studies suggest certain personality traits can increase the risk of burnout.
But the data is also clear that, over the long term, personality makes a relatively small impact and workplace culture and expectations are far more significant in determining who burns out.
By framing burnout as an individual worker problem, organisations do not have to examine deeper systemic issues like toxic work cultures, unrealistic expectations, or inadequate support structures.
The employee — not the employer — is paying the price.
[...] "In any service work if you are deeply connected to the cause, you are more at risk of burnout," she says.
Jill scoffs at resilience training, mindfulness, wellness programs and apps as satisfactory measures to fix burnout.
"The whole idea of someone being resilient is ridiculous," she says. "To whose standard?"
She sees restorative justice as a model for treating burnout. The worker and employer talk about the conditions that lead to burnout and explore new ways of working that may alter the workplace and make it less harmful for others.
The clearest example in Australia of what happens when governments and institutions accept that burnout is their problem to solve is in education.
Teacher burnout in Australia is not new. But it has reached a point where its consequences are too visible and too costly to keep attributing to individual teacher inadequacy.
[...] The National Teacher Workforce Action Plan is a federal government attempt to address burnout on a systemic level. It seeks to do this by reducing workloads, improving retention and increasing teacher support. According to the plan, the key strategies focus on relieving administrative burdens, expanding mentorship and providing financial incentives.
[...] Dr Ben Arnold, an associate professor in educational leadership at Deakin University, says teachers have higher levels of meaning in their work than many others, but it comes at a cost.
"They have higher workload, higher pace, higher cognitive demands, and very high emotional demands. And then there are all these other non-teaching things as well," he says.
These include communication with parents that goes way beyond the usual check-in at parent-teacher night, more admin, and external testing.
"Teachers often describe earlier decades in Australian education as a period when they experienced greater professional autonomy and public trust," says Arnold, whose research focuses on how education policies and working conditions in schools impact the health, sustainability and diversity of teachers.
Increasing emphasis on performance measurement, accountability, external testing and compliance has introduced additional pressures and administrative demands, he says, and teacher goodwill holds the system together.
[...] "We see there's a link between teacher burnout and student achievement," says Collie. "It is a system thing."
[...] "Mindfulness, taking time off: these can keep burnout at bay. But if you are working in a toxic workplace, you need to address that," he says. "Leaving one toxic workplace for another will not help."
[...] The cleanest individual solutions to burnout — leave the job, take months off, downshift — are available only to those with financial security.
For everyone else, the question of systemic change is not a luxury. It's the only real option.